
    Early and Late Stage Mechanisms for Vocalization Processing in the Human Auditory System

    The human auditory system rapidly processes incoming acoustic information, actively filtering, categorizing, or suppressing different elements of the incoming acoustic stream. Vocalizations produced by other humans (conspecifics) likely represent the most ethologically relevant sounds encountered by hearing individuals. Subtle acoustic characteristics of these vocalizations aid in determining the identity, emotional state, health, intent, etc. of the producer. The ability to assess vocalizations is likely subserved by a specialized network of structures and functional connections that are optimized for this stimulus class. Early elements of this network would show sensitivity to the most basic acoustic features of these sounds; later elements may show categorically selective response patterns that represent high-level semantic organization of different classes of vocalizations. Functional magnetic resonance imaging (fMRI) and electrophysiological studies were performed to investigate and describe some of the earlier- and later-stage mechanisms of conspecific vocalization processing in human auditory cortices. Using fMRI, cortical representations of harmonic signal content were found along the middle superior temporal gyri, between primary auditory cortices along Heschl's gyri and the superior temporal sulci, higher-order auditory regions. Electrophysiological findings additionally demonstrated a parametric response profile to harmonic signal content. Utilizing a novel class of vocalizations, human-mimicked versions of animal vocalizations, we demonstrated the presence of a left-lateralized cortical processing hierarchy for conspecific vocalizations, contrary to previous findings describing similar bilateral networks. This hierarchy originated near primary auditory cortices and was further supported by auditory evoked potential data suggesting differential temporal processing dynamics for conspecific human vocalizations versus those produced by other species. Taken together, these results suggest that there are auditory cortical networks highly optimized for processing utterances produced by the human vocal tract. Understanding the function and structure of these networks will be critical for advancing the development of novel communicative therapies and the design of future assistive hearing devices.
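    The abstract does not define how "harmonic signal content" was quantified in these studies. One simple, hypothetical way to parameterize it is the fraction of a sound's spectral energy concentrated at integer multiples of a fundamental frequency; the function name and tolerance parameter below are illustrative assumptions, not taken from the study:

```python
import numpy as np

def harmonic_energy_fraction(signal, fs, f0, n_harmonics=10, tol_hz=20.0):
    """Fraction of total spectral energy lying within tol_hz of the
    first n_harmonics integer multiples of the fundamental f0."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    near_harmonic = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        near_harmonic |= np.abs(freqs - k * f0) < tol_hz
    return spectrum[near_harmonic].sum() / spectrum.sum()
```

    A purely harmonic tone complex scores near 1.0 on such a measure, while broadband noise scores near the fraction of the spectrum the harmonic windows cover, so stimuli can in principle be ordered parametrically between those extremes.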

    Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though it relies on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
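    The SSV measure is described here only qualitatively (change in entropy of the acoustic signal over time). One plausible sketch, not necessarily the authors' exact computation, is frame-wise spectral entropy with SSV taken as its standard deviation across frames, assuming scipy is available:

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_entropy_per_frame(signal, fs, nperseg=512):
    """Shannon entropy (bits) of the normalized power spectrum in each
    short-time frame of the signal."""
    _, _, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    P = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)  # per-frame probability
    return -(P * np.log2(P + 1e-12)).sum(axis=0)

def spectral_structure_variation(signal, fs):
    """Hypothetical SSV proxy: how much frame-wise entropy varies over time."""
    return spectral_entropy_per_frame(signal, fs).std()
```

    Under this proxy, a steady tone or steady noise yields low SSV (entropy is roughly constant across frames), while a sound alternating between tonal and noisy segments yields high SSV; the mean-entropy measure mentioned for the right STG would simply be the frame-wise average.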

    RNA–protein binding kinetics in an automated microfluidic reactor

    Microfluidic chips can automate biochemical assays on the nanoliter scale, which is of considerable utility for RNA–protein binding reactions that would otherwise require large quantities of proteins. Unfortunately, complex reactions involving multiple reactants cannot be prepared in current microfluidic mixer designs, nor is investigation of long-time-scale reactions possible. Here, a microfluidic ‘Riboreactor’ has been designed and constructed to facilitate the study of the kinetics of RNA–protein complex formation over long time scales. With computer automation, the reactor can prepare binding reactions from any combination of eight reagents, and is optimized to monitor long reaction times. By integrating a two-photon microscope into the microfluidic platform, 5-nl reactions can be observed for longer than 1000 s with single-molecule sensitivity and negligible photobleaching. Using the Riboreactor, RNA–protein binding reactions with a fragment of the bacterial 30S ribosome were prepared in a fully automated fashion, and binding rates were consistent with rates obtained from conventional assays. The microfluidic chip successfully combines automation, low sample consumption, ultra-sensitive fluorescence detection, and a high degree of reproducibility. The chip should be able to probe complex reaction networks describing the assembly of large multicomponent RNPs such as the ribosome.
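    The abstract reports that binding rates matched conventional assays but does not give the kinetic model. A common treatment of such fluorescence time courses under pseudo-first-order conditions is a single-exponential fit; the sketch below uses synthetic data, and all parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_curve(t, amplitude, k_obs, baseline):
    """Single-exponential approach to equilibrium:
    F(t) = baseline + amplitude * (1 - exp(-k_obs * t))."""
    return baseline + amplitude * (1.0 - np.exp(-k_obs * t))

# Synthetic fluorescence trace spanning the >1000 s window the reactor supports.
t = np.linspace(0, 1000, 200)  # seconds
rng = np.random.default_rng(1)
observed = binding_curve(t, 1.0, 0.01, 0.1) + 0.01 * rng.standard_normal(t.size)

# Recover the observed rate constant k_obs from the noisy trace.
popt, _ = curve_fit(binding_curve, t, observed, p0=[1.0, 0.005, 0.0])
amplitude_fit, k_obs_fit, baseline_fit = popt
```

    With the protein in excess, the fitted k_obs would vary linearly with protein concentration, which is one way fitted microfluidic rates can be compared against conventional bulk assays.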